Keynotes
We are pleased to announce the following keynote speakers for CVMP 2024:
Aljosa Smolic, Lucerne University of Applied Sciences and Arts
AI-based Volumetric Content Creation for Immersive XR Experiences and Production Workflows
Aljosa Smolic is Professor in the Computer Science Department of the Lucerne University of Applied Sciences and Arts in Switzerland and Co-Head of the Immersive Realities Research Lab. Before that, he was Professor of Creative Technologies at Trinity College Dublin, heading the research group V-SENSE; Senior Research Scientist and Group Leader at Disney Research Zurich; and Scientific Project Manager and Group Leader at Fraunhofer HHI. He is also a co-founder of the company Volograms, which commercializes volumetric video technology. Prof. Smolic’s expertise is in the broad area of visual computing (covering image/video processing, computer vision, and computer graphics) with a focus on immersive XR technologies. He has published 250+ scientific papers and book chapters, holds 35+ patents, and has received several awards and recognitions for his research.
Shunsuke Saito, Reality Labs Research, Meta
Foundations for 3D Digital Humans
What constitutes the foundation for 3D digital human avatars? In this talk, we aim to establish the essential components necessary for creating high-fidelity digital human models. We argue that relighting, animation/interaction, and in-the-wild generalization are crucial for bringing high-quality avatars to everyone. We will discuss several relightable appearance representations that achieve a photorealistic appearance under various lighting conditions. Furthermore, we will introduce techniques to effectively model animation and interaction priors. Finally, the talk will cover bridging the domain gap between high-quality studio data and large-scale in-the-wild data via a human-centric foundational model called Sapiens, which is key to enhancing robustness and diversity in avatar modeling algorithms. We will also explore how these foundations can complement and enhance each other.
Shunsuke Saito is a Research Scientist at Meta Reality Labs Research in Pittsburgh, where he leads the effort on next-generation digital humans. He obtained his PhD at the University of Southern California. Prior to USC, he was a Visiting Researcher at the University of Pennsylvania in 2014. He obtained his BE (2013) and ME (2014) in Applied Physics at Waseda University. His research lies at the intersection of computer graphics, computer vision, and machine learning, centered around digital humans, 3D reconstruction, and performance capture. His work has been published at SIGGRAPH, SIGGRAPH Asia, NeurIPS, ECCV, ICCV, and CVPR, three of which were nominated for Best Paper Awards at CVPR (2019, 2021) and ECCV (2024). His real-time volumetric teleportation work also won the Best in Show award at SIGGRAPH 2020 Real-Time Live!.
Ana Serrano, Universidad de Zaragoza
Understanding user behavior and attention in immersive environments
Virtual reality (VR) is an exciting and rapidly growing medium that presents both challenges and opportunities. As VR techniques and applications continue to blossom, creating engaging experiences that exploit their potential becomes increasingly important. Understanding and being able to reliably predict human visual behavior is an essential factor in achieving this goal. This knowledge can be the key to designing more engaging storytelling experiences and developing efficient content-aware compression and rendering techniques that take into account users’ attentional patterns and behavior. In this talk, we will explore approaches and challenges involved in modeling visual attention and gaze behavior in immersive 360° environments. By studying how users allocate their attention and direct their gaze, we can uncover valuable insights for creating immersive VR experiences.
Ana Serrano is an Associate Professor at Universidad de Zaragoza (Spain). Previously, she was a Postdoctoral Research Fellow at the Max Planck Institute for Informatics. She received her PhD in Computer Science in 2019. Her doctoral thesis was recognized with one of the Eurographics 2020 PhD Awards, and she received the Eurographics Young Researcher Award in 2023 and the VGTC Significant New Researcher Award in 2024. Her research focuses on various areas of visual computing, including computational imaging, material appearance perception and editing, and virtual reality. She is particularly interested in using perceptually driven approaches to improve user experiences and develop tools to assist content creation.
Sarah Ellis, Royal Shakespeare Company
Sarah Ellis is an award-winning producer currently working as Director of Digital Development for the Royal Shakespeare Company to explore new artistic initiatives and partnerships. The latest partnership for the RSC is the Audience of the Future Live Performance Demonstrator funded by Innovate UK, a consortium of arts organisations, research partners, and technology companies exploring the future of performances and real-time immersive experiences. As a spoken word producer, she has worked with the Old Vic Tunnels, Battersea Arts Centre, Birmingham REP, Contact, Improbable, Southbank Centre, Soho Theatre, and Shunt. She has been Head of Creative Programmes at the Albany Theatre and Programme Manager for Apples & Snakes. She is a regular speaker and commentator on digital arts practice, as well as an Industry Champion for the Creative Industries Policy and Evidence Centre, which helps inform academic research on the creative industries to lead to better policies for the sector. She has been appointed Chair of The Space, a digital agency established by Arts Council England and the BBC to help promote digital engagement across the arts.